A second-order globally convergent direct-search method and its worst-case complexity

Authors

  • S. Gratton
  • C. W. Royer
  • L. N. Vicente
Abstract

Direct-search algorithms form one of the main classes of algorithms for smooth unconstrained derivative-free optimization, due to their simplicity and their well-established convergence results. They proceed by iteratively looking for improvement along some vectors or directions. In the presence of smoothness, first-order global convergence comes from the ability of the vectors to approximate the steepest descent direction, which can be quantified by a first-order criticality (cosine) measure. The use of a set of vectors with a positive cosine measure, together with the imposition of a sufficient decrease condition to accept new iterates, leads to a convergence result as well as a worst-case complexity bound. In this paper, we present a second-order study of a general class of direct-search methods. We start by proving a weak second-order convergence result related to a criticality measure defined along the directions used throughout the iterations. Extensions of this result to obtain a true second-order optimality result are discussed, one possibility being a method using approximate Hessian eigenvectors as directions (which is proved to be truly second-order globally convergent). Numerically guaranteeing such convergence can be rather expensive, as indicated by the worst-case complexity analysis provided in this paper, but it turns out to be appropriate for some pathological examples.
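To make the algorithmic framework concrete, the sketch below illustrates a basic directional direct-search iteration governed by a sufficient decrease condition. It is only a minimal illustration under assumed choices (the forcing function rho(alpha) = c*alpha^2, a fixed coordinate polling set, and the expansion/contraction factors), not the exact class of methods analyzed in the paper.

```python
import numpy as np

def direct_search(f, x0, alpha0=1.0, c=1e-4, theta=0.5, gamma=2.0,
                  alpha_min=1e-8, max_iter=10000):
    """Illustrative directional direct-search loop with the sufficient
    decrease condition f(x + alpha*d) < f(x) - c*alpha**2.  The polling
    set is the fixed positive spanning set [e_1,...,e_n, -e_1,...,-e_n];
    the general framework allows richer (possibly varying) direction sets."""
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    n = x.size
    D = np.vstack([np.eye(n), -np.eye(n)])   # coordinate vectors and their negatives
    alpha = alpha0
    for _ in range(max_iter):
        if alpha < alpha_min:
            break
        for d in D:                           # poll the directions
            x_trial = x + alpha * d
            f_trial = f(x_trial)
            if f_trial < fx - c * alpha**2:   # sufficient decrease test
                x, fx = x_trial, f_trial
                alpha *= gamma                # successful poll: expand the step
                break
        else:
            alpha *= theta                    # unsuccessful poll: contract the step
    return x, fx

# usage: minimize a simple smooth quadratic
x_best, f_best = direct_search(lambda x: float(np.sum((x - 1.0) ** 2)), np.zeros(3))
```

An unsuccessful poll over the whole direction set triggers a step-size contraction; this mechanism is what drives both the convergence analysis and the worst-case complexity bounds in this line of work.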

Similar articles

Complexity and global rates of trust-region methods based on probabilistic models

Trust-region algorithms have been proved to globally converge with probability one when the accuracy of the trust-region models is imposed with a certain probability conditioned on the iteration history. In this paper, we study their complexity, providing global rates and worst-case complexity bounds on the number of iterations (with overwhelmingly high probability), for both first and second ...
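For orientation, each trust-region iteration minimizes a quadratic model within a ball of radius Delta_k; in the probabilistic setting the model is only required to be sufficiently accurate with a prescribed probability, conditioned on the past. The notation below (m_k, g_k, H_k) is generic and not taken from the excerpt.

```latex
% Generic trust-region subproblem solved at iteration k
\min_{s \in \mathbb{R}^n} \; m_k(x_k + s) \;=\; f(x_k) + g_k^{\top} s + \tfrac{1}{2}\, s^{\top} H_k s
\qquad \text{subject to } \|s\| \le \Delta_k .
```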

Direct search based on probabilistic feasible descent for bound and linearly constrained problems

Direct search is a methodology for derivative-free optimization whose iterations are characterized by evaluating the objective function using a set of polling directions. In deterministic direct search applied to smooth objectives, these directions must somehow conform to the geometry of the feasible region and typically consist of positive generators of approximate tangent cones (which then re...
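As a simple illustration of directions that conform to the feasible geometry, consider the special case of bound constraints (a standard observation, not a summary of the excerpted paper's general linearly constrained analysis): the coordinate vectors and their negatives positively generate every tangent cone of the feasible set and thus form a natural polling set.

```latex
D_{\oplus} \;=\; \{\, e_1, \dots, e_n, -e_1, \dots, -e_n \,\},
\qquad \text{for } \Omega = \{\, x \in \mathbb{R}^n : \ell \le x \le u \,\}.
```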

The direct extension of ADMM for three-block separable convex minimization models is convergent when one function is strongly convex

The alternating direction method of multipliers (ADMM) is a benchmark for solving a two-block linearly constrained convex minimization model whose objective function is the sum of two functions without coupled variables. Meanwhile, it is known that the convergence is not guaranteed if the ADMM is directly extended to a multiple-block convex minimization model whose objective function has more t...
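For reference, the classical two-block ADMM iteration referred to here can be sketched as follows (generic notation; the "direct extension" discussed in the excerpt inserts a third, analogous minimization step for a third block):

```latex
% model:  \min\; f_1(x_1) + f_2(x_2)  \ \text{s.t.}\  A_1 x_1 + A_2 x_2 = b,
% with augmented Lagrangian
% \mathcal{L}_\beta(x_1,x_2,\lambda) = f_1(x_1) + f_2(x_2)
%   - \lambda^{\top}(A_1 x_1 + A_2 x_2 - b) + \tfrac{\beta}{2}\,\|A_1 x_1 + A_2 x_2 - b\|^2
x_1^{k+1} = \arg\min_{x_1} \ \mathcal{L}_\beta\big(x_1, x_2^{k}, \lambda^{k}\big), \qquad
x_2^{k+1} = \arg\min_{x_2} \ \mathcal{L}_\beta\big(x_1^{k+1}, x_2, \lambda^{k}\big), \qquad
\lambda^{k+1} = \lambda^{k} - \beta\,\big(A_1 x_1^{k+1} + A_2 x_2^{k+1} - b\big).
```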

A Quasi-Newton Approach to Nonsmooth Convex Optimization Problems in Machine Learning

We extend the well-known BFGS quasi-Newton method and its memory-limited variant LBFGS to the optimization of nonsmooth convex objectives. This is done in a rigorous fashion by generalizing three components of BFGS to subdifferentials: the local quadratic model, the identification of a descent direction, and the Wolfe line search conditions. We prove that under some technical conditions, the re...
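For context, the classical (smooth) BFGS update of the inverse Hessian approximation that is being generalized reads as follows, in standard notation; in the nonsmooth setting described above, the gradient differences are replaced by suitable subgradient-based quantities.

```latex
s_k = x_{k+1} - x_k, \quad y_k = \nabla f(x_{k+1}) - \nabla f(x_k), \quad \rho_k = \frac{1}{y_k^{\top} s_k},
\qquad
H_{k+1} = \big(I - \rho_k s_k y_k^{\top}\big)\, H_k \,\big(I - \rho_k y_k s_k^{\top}\big) + \rho_k s_k s_k^{\top},
```

where H_k denotes the inverse Hessian approximation.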

Smoothing and Worst-Case Complexity for Direct-Search Methods in Nonsmooth Optimization

In the context of the derivative-free optimization of a smooth objective function, it has been shown that the worst-case complexity of direct-search methods is of the same order as that of steepest descent for derivative-based optimization; more precisely, the number of iterations needed to reduce the norm of the gradient of the objective function below a certain threshold is proportiona...
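As a point of reference, the smooth-case bound alluded to in this excerpt is usually stated as follows (taken from the general direct-search complexity literature rather than from the truncated text above):

```latex
\|\nabla f(x_k)\| \le \epsilon \quad \text{after at most} \quad
\mathcal{O}\!\big(\epsilon^{-2}\big) \ \text{iterations and} \
\mathcal{O}\!\big(n^{2}\epsilon^{-2}\big) \ \text{function evaluations},
```

which matches, up to the dependence on the dimension n, the O(epsilon^{-2}) iteration bound of steepest descent.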


Journal title:

Volume   Issue

Pages  -

Publication date: 2015